Efficient similar exercise retrieval model based on unsupervised semantic hashing
Wei TONG, Liyang HE, Rui LI, Wei HUANG, Zhenya HUANG, Qi LIU
Journal of Computer Applications    2024, 44 (1): 206-216.   DOI: 10.11772/j.issn.1001-9081.2023091260

Finding similar exercises aims to retrieve, from an exercise database, exercises whose testing goals are similar to those of a given query exercise. As online education evolves, exercise databases keep growing, and because exercises are highly specialized, annotating the relations between them is difficult. Online education systems therefore require an efficient, unsupervised model for finding similar exercises. Unsupervised semantic hashing can map high-dimensional data to compact and efficient binary representations using only unsupervised signals. However, simply applying a semantic hashing model to similar exercise retrieval is inadequate, because exercise data contains rich semantic information while the representation space of a binary vector is limited. To address this issue, a similar exercise retrieval model was introduced to acquire and retain crucial information. Firstly, a crucial information acquisition module was designed to extract critical information from exercise data, and a de-redundancy objective loss was proposed to eliminate redundant information. Secondly, a time-aware activation function was introduced to reduce the information loss of encoding. Thirdly, to make full use of the Hamming space, a bit balance loss and a bit independence loss were introduced to optimize the distribution of the binary representations during optimization. Experimental results on the MATH and HISTORY datasets demonstrate that the proposed model outperforms the state-of-the-art text semantic hashing model Deep Hash InfoMax (DHIM), with average improvements of approximately 54% and 23% respectively across three recall settings. Moreover, compared with QuesCo, the best-performing similar exercise retrieval model, the proposed model shows a clear advantage in search efficiency.
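For illustration, a minimal NumPy sketch of bit balance and bit independence penalties of the kind mentioned in the abstract is given below; the exact formulation used in the paper may differ, and the variable names, batch setup and tanh relaxation are assumptions.

```python
# Minimal sketch (not the paper's code) of bit balance and bit independence
# penalties over a batch of relaxed binary codes in [-1, 1].
import numpy as np

def bit_balance_loss(codes: np.ndarray) -> float:
    # Each bit should be +1/-1 equally often across the batch, i.e. zero mean.
    bit_means = codes.mean(axis=0)                  # (n_bits,)
    return float((bit_means ** 2).sum())

def bit_independence_loss(codes: np.ndarray) -> float:
    # Push the empirical bit correlation matrix towards the identity so that
    # different bits carry non-redundant information.
    n_samples, n_bits = codes.shape
    corr = codes.T @ codes / n_samples              # (n_bits, n_bits)
    return float(((corr - np.eye(n_bits)) ** 2).sum())

# Toy usage: 8 exercises encoded as 16-bit relaxed codes.
rng = np.random.default_rng(0)
codes = np.tanh(rng.normal(size=(8, 16)))
print(bit_balance_loss(codes), bit_independence_loss(codes))
```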

Feature selection method based on self-adaptive hybrid particle swarm optimization for software defect prediction
Zhenhua YU, Zhengqi LIU, Ying LIU, Cheng GUO
Journal of Computer Applications    2023, 43 (4): 1206-1213.   DOI: 10.11772/j.issn.1001-9081.2022030444

Feature selection is a key step in data preprocessing for software defect prediction. To address the problems of existing feature selection methods, such as insignificant dimension reduction and low classification accuracy of the selected feature subset, a feature selection method for software defect prediction based on Self-adaptive Hybrid Particle Swarm Optimization (SHPSO) was proposed. Firstly, combined with population partitioning, a self-adaptive weight update strategy based on Q-learning was designed, in which Q-learning was introduced to adaptively adjust the inertia weight according to the states of the particles. Secondly, to balance the global search ability in the early stage of the algorithm and the convergence speed in the later stage, time-varying learning factors based on curve adaptivity were proposed. Finally, a hybrid position update strategy was adopted to help particles escape local optima as early as possible and to increase particle diversity. Experiments were carried out on 12 public software defect datasets. The results show that, compared with the method using all features, commonly used traditional feature selection methods, and mainstream feature selection methods based on intelligent optimization algorithms, the proposed method effectively improves the classification accuracy of the software defect prediction model and reduces the dimension of the feature space. Compared with the Improved Salp Swarm Algorithm (ISSA), the proposed method increases the classification accuracy by about 1.60% on average and reduces the feature subset size by about 63.79% on average. These results show that the proposed method can select a feature subset with high classification accuracy and small size.
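As a rough illustration of how such an update might look, the sketch below combines an epsilon-greedy Q-learning choice of inertia weight with time-varying learning factors in a standard PSO velocity update; the discretized inertia values, state encoding and decay curve are assumptions, not the SHPSO settings.

```python
# Illustrative PSO velocity update with a Q-learning-chosen inertia weight
# and time-varying learning factors (assumed values, not the paper's).
import numpy as np

rng = np.random.default_rng(1)
INERTIA_ACTIONS = np.array([0.4, 0.7, 0.9])       # candidate inertia weights
q_table = np.zeros((2, len(INERTIA_ACTIONS)))     # 2 coarse particle states

def learning_factors(t, t_max, c_start=2.5, c_end=0.5):
    # c1 decays and c2 grows along a quadratic curve over the iterations.
    frac = (t / t_max) ** 2
    return c_start - (c_start - c_end) * frac, c_end + (c_start - c_end) * frac

def velocity_update(v, x, pbest, gbest, t, t_max, state, eps=0.1):
    # Epsilon-greedy action selection over the candidate inertia weights.
    if rng.random() < eps:
        action = rng.integers(len(INERTIA_ACTIONS))
    else:
        action = int(np.argmax(q_table[state]))
    w = INERTIA_ACTIONS[action]
    c1, c2 = learning_factors(t, t_max)
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x), action

# Toy usage on a 5-dimensional particle at iteration 10 of 100.
x = rng.random(5); v = np.zeros(5)
v, action = velocity_update(v, x, x + 0.1, x + 0.2, t=10, t_max=100, state=0)
print(v, action)
```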

News recommendation model with deep feature fusion injecting attention mechanism
Yuxi LIU, Yuqi LIU, Zonglin ZHANG, Zhihua WEI, Ran MIAO
Journal of Computer Applications    2022, 42 (2): 426-432.   DOI: 10.11772/j.issn.1001-9081.2021050907

When mining news features and user features, existing news recommendation models often lack comprehensiveness, because they fail to consider the relations among browsed news items, changes over the time series, and the varying importance of different news to users. They also fall short in mining finer-grained content features. Therefore, a news recommendation model with deep feature fusion injecting an attention mechanism was constructed, which characterizes users comprehensively and without redundancy and extracts features of finer-grained news fragments. Firstly, a deep learning-based method was used to extract the feature matrix of the news text through a Convolutional Neural Network (CNN) injecting an attention mechanism. Secondly, by adding time series prediction to the news that users had browsed and injecting a multi-head self-attention mechanism, the interest characteristics of users were extracted. Finally, experiments were carried out on a real Chinese dataset and an English dataset, with convergence time, Mean Reciprocal Rank (MRR) and normalized Discounted Cumulative Gain (nDCG) as indicators. Compared with Neural news Recommendation with Multi-head Self-attention (NRMS) and other models, on the Chinese dataset the proposed model achieves average improvement rates of -0.22% to 4.91% on nDCG and -0.82% to 3.48% on MRR; compared with the only model showing a negative improvement rate, the proposed model reduces the convergence time by 7.63%. On the English dataset, the improvement rates reach 0.07% to 1.75% on nDCG and 0.03% to 1.30% on MRR, while the model consistently converges quickly. Ablation results show that adding the attention mechanism and the time series prediction module is effective.
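A hedged PyTorch sketch of the user-side component described above is shown below: multi-head self-attention over the sequence of browsed-news vectors followed by attentive pooling. The dimensions, class name and pooling choice are illustrative assumptions, not the paper's architecture.

```python
# Simplified user encoder: self-attention over browsed-news vectors, then
# additive attention pooling into a single user representation.
import torch
import torch.nn as nn

class UserEncoder(nn.Module):
    def __init__(self, news_dim=256, num_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(news_dim, num_heads, batch_first=True)
        self.pool = nn.Linear(news_dim, 1)    # additive attention pooling

    def forward(self, browsed_news):          # (batch, history_len, news_dim)
        attended, _ = self.self_attn(browsed_news, browsed_news, browsed_news)
        weights = torch.softmax(self.pool(attended), dim=1)   # (batch, len, 1)
        return (weights * attended).sum(dim=1)                # (batch, news_dim)

# Toy usage: 2 users, each with 30 browsed news items of dimension 256.
user_vec = UserEncoder()(torch.randn(2, 30, 256))
print(user_vec.shape)   # torch.Size([2, 256])
```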

Parallel chain consensus algorithm optimization scheme based on Boneh-Lynn-Shacham aggregate signature technology
Qi LIU, Rongxin GUO, Wenxian JIANG, Dengji MA
Journal of Computer Applications    2022, 42 (12): 3785-3791.   DOI: 10.11772/j.issn.1001-9081.2021101711

At present, each consensus node of a parallel chain must send its own consensus transaction to the main chain to participate in consensus. As a result, a large number of consensus transactions seriously occupy the block capacity of the main chain and waste transaction fees. To solve these problems, an optimization scheme for the parallel chain consensus algorithm based on BLS (Boneh-Lynn-Shacham) aggregate signature technology was proposed, combining bilinear pairing technology with the fact that consensus transactions on a parallel chain share the same consensus data but carry different signatures. Firstly, the transaction data was signed by each consensus node. Then, the consensus transaction was broadcast by each node of the parallel chain, and messages were synchronized internally through a P2P (Peer-to-Peer) network. Finally, the consensus transactions were counted by the Leader node; when the proportion of consensus transactions exceeded 2/3, the corresponding BLS signature data was aggregated and the aggregate signature was sent to the main chain for verification. Experimental results show that, compared with the original parallel chain consensus algorithm, the proposed scheme effectively avoids consensus nodes on the parallel chain repeatedly sending consensus transactions to the main chain, saves transaction fees and reduces the occupancy of the main chain's storage space, occupying only 4 KB of main chain storage and generating a transaction fee of only 0.01 BitYuan (BTY).
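The aggregation flow can be sketched as follows, assuming the eth2-style BLS interface shipped with the py_ecc library (G2ProofOfPossession); the payload, keys and threshold check are illustrative, and the actual parallel-chain implementation may use a different BLS library.

```python
# Sketch of the BLS aggregation flow (assumed py_ecc interface, not the
# scheme's actual implementation).
from py_ecc.bls import G2ProofOfPossession as bls

consensus_tx = b"parallel chain consensus payload"   # same data for all nodes
secret_keys = [11, 22, 33, 44]                        # illustrative node keys
public_keys = [bls.SkToPk(sk) for sk in secret_keys]

# Each consensus node signs the identical consensus transaction.
signatures = [bls.Sign(sk, consensus_tx) for sk in secret_keys]

# Once more than 2/3 of the nodes have signed, the Leader aggregates the
# signatures into one and submits a single transaction to the main chain.
if len(signatures) * 3 > 2 * len(secret_keys):
    aggregate_sig = bls.Aggregate(signatures)
    assert bls.FastAggregateVerify(public_keys, consensus_tx, aggregate_sig)
```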

Multi-head attention memory network for short text sentiment classification
Yu DENG, Xiaoyu LI, Jian CUI, Qi LIU
Journal of Computer Applications    2021, 41 (11): 3132-3138.   DOI: 10.11772/j.issn.1001-9081.2021010040

With the development of social networks, analyzing the sentiments of the massive texts they contain has important social value. Unlike ordinary text classification, short text sentiment classification needs to mine implicit sentiment semantic features, which makes it particularly difficult and challenging. To obtain higher-level sentiment semantic features of short texts, a new Multi-head Attention Memory Network (MAMN) was proposed for short text sentiment classification. Firstly, n-gram feature information and an Ordered Neurons Long Short-Term Memory (ON-LSTM) network were used to improve the multi-head self-attention mechanism, so that the internal relations of the text context were fully extracted and the model obtained richer text feature information. Secondly, a multi-head attention mechanism was adopted to optimize the multi-hop memory network structure, expanding the depth of the model while mining higher-level internal semantic relations of the context. Extensive experiments were carried out on the Movie Review (MR), Stanford Sentiment Treebank (SST)-1 and SST-2 datasets. The results show that, compared with baseline models based on Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN) structures as well as some recent works, the proposed MAMN achieves better classification results, and the importance of the multi-hop structure to the performance improvement is verified.
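The multi-hop structure can be illustrated with a simplified PyTorch sketch: each hop attends over the memory (the encoded text) with multi-head attention and refines the query representation. The dimensions, hop count and residual update are assumptions, not the MAMN configuration.

```python
# Simplified multi-hop attention memory: repeated attention over the encoded
# text, each hop refining the query representation.
import torch
import torch.nn as nn

class MultiHopAttentionMemory(nn.Module):
    def __init__(self, dim=128, heads=4, hops=3):
        super().__init__()
        self.hops = nn.ModuleList(
            nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(hops)
        )

    def forward(self, query, memory):   # query: (B, 1, D), memory: (B, L, D)
        for attn in self.hops:
            out, _ = attn(query, memory, memory)
            query = query + out          # residual update per hop
        return query.squeeze(1)          # (B, D) sentiment representation

# Toy usage: 2 short texts of 40 tokens, 128-dimensional encodings.
rep = MultiHopAttentionMemory()(torch.randn(2, 1, 128), torch.randn(2, 40, 128))
print(rep.shape)   # torch.Size([2, 128])
```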

Evaluation mechanism for preventing free-rider in super-node overlay
Jun CHEN, Jia-Qi LIU, Zhi-Gang CHEN
Journal of Computer Applications   
Most super node selection schemes in P2P super-node overlays are based on physical performance and do not consider free-riders, which reduces efficiency in real networks. An evaluation mechanism was presented that fully considers physical distance, information exchange frequency and the degree of query similarity. A satisfaction degree was proposed to select super nodes and the target nodes for query requests, not only to improve system efficiency but also to reduce or even eliminate free-rider nodes. Experiments show that the evaluation mechanism improves the query success rate and decreases the average query hops and delay in the P2P super-node overlay.
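One plausible (assumed, not the paper's) weighted form of the satisfaction degree is sketched below: closer nodes, more frequent information exchanges and more similar queries score higher, so free-riding nodes score low and are unlikely to be chosen as super nodes or query targets.

```python
# Illustrative satisfaction degree; the weights and normalisation are
# assumptions, not the values from the paper.
def satisfaction_degree(distance, exchange_freq, query_similarity,
                        w_dist=0.3, w_freq=0.3, w_sim=0.4, max_distance=1000.0):
    """exchange_freq and query_similarity are assumed normalised to [0, 1]."""
    closeness = 1.0 - min(distance / max_distance, 1.0)
    return w_dist * closeness + w_freq * exchange_freq + w_sim * query_similarity

# A cooperative node versus a free-riding node at the same physical distance.
print(satisfaction_degree(distance=120.0, exchange_freq=0.8, query_similarity=0.7))
print(satisfaction_degree(distance=120.0, exchange_freq=0.0, query_similarity=0.1))
```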